Results 1 - 20 of 4,882
1.
Sci Rep ; 14(1): 7768, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565548

ABSTRACT

Repeatability of measurements from image analytics is difficult to achieve, due to the heterogeneity and complexity of cell samples, the need for exact microscope stage positioning, and variations in slide thickness. We present a method to define and use a reference focal plane that provides repeatable measurements with very high accuracy, by relying on control beads as reference material and a convolutional neural network focused on the control bead images. Previously we defined a reference effective focal plane (REFP) based on the image gradient of bead edges and three specific bead image features. This paper both generalizes and improves on this previous work. First, we refine the definition of the REFP by fitting a cubic spline to describe the relationship between the distance from a bead's center and pixel intensity, and by sharing information across experiments, exposures, and fields of view. Second, we remove our reliance on image features that behave differently from one instrument to another. Instead, we apply a convolutional regression neural network (ResNet 18), trained on cropped bead images, that is generalizable to multiple microscopes. Our ResNet 18 network predicts the location of the REFP from a single inference on one image acquisition, which can be taken across a wide range of focal planes and exposure times. We illustrate the different strategies and hyperparameter optimization of the ResNet 18 to achieve a high prediction accuracy, with the uncertainty for every image tested falling within the microscope repeatability measure of 7.5 µm from the desired focal plane. We demonstrate the generalizability of this methodology by applying it to two different optical systems and show that this level of accuracy can be achieved using only 6 beads per image.
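As a rough illustration of the REFP refinement described above, the radial intensity profile to which the cubic spline is fit can be sketched in a few lines of numpy. This is a minimal sketch under stated assumptions: the function name, binning scheme, and bin count are illustrative, not from the paper.

```python
import numpy as np

def radial_profile(img, center, n_bins=10):
    """Average pixel intensity as a function of distance from the bead center.

    Returns (bin_centers, mean_intensity), the kind of curve a cubic spline
    would then be fit to when defining the REFP. Binning is an assumption.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - center[0], xx - center[1])       # distance from center
    edges = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=img.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, sums / np.maximum(counts, 1)
```

For a defocused bead, this profile flattens; comparing profiles across z positions is one way such a reference plane could be located.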

2.
J Surg Case Rep ; 2024(4): rjae188, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38572284

ABSTRACT

The treatment of recurrent ovarian cancer has been based on systemic therapy. The role of secondary cytoreductive surgery has been addressed recently in several trials. Imaging plays a key role in helping the surgical team decide which patients will have resectable disease and benefit from surgery. The role of staging laparoscopy and of several imaging and clinical scores has been extensively debated in the field. In other surgical fields, there have been reports of using 3D imaging software and 3D printed models to help surgeons better plan the surgical approach. To the best of our knowledge, we report the first case of a patient with recurrent ovarian cancer undergoing 3D modeling before secondary cytoreductive surgery. The 3D modeling was most valuable in evaluating the extent of disease in our patient, who underwent a successful secondary cytoreductive surgery and is currently free of disease.

3.
Hum Reprod ; 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600621

ABSTRACT

STUDY QUESTION: Can generative artificial intelligence (AI) models produce high-fidelity images of human blastocysts? SUMMARY ANSWER: Generative AI models exhibit the capability to generate high-fidelity human blastocyst images, thereby providing substantial training datasets crucial for the development of robust AI models. WHAT IS KNOWN ALREADY: The integration of AI into IVF procedures holds the potential to enhance objectivity and automate embryo selection for transfer. However, the effectiveness of AI is limited by data scarcity and ethical concerns related to patient data privacy. Generative adversarial networks (GAN) have emerged as a promising approach to alleviate data limitations by generating synthetic data that closely approximate real images. STUDY DESIGN, SIZE, DURATION: Blastocyst images were included as training data from a public dataset of time-lapse microscopy (TLM) videos (n = 136). A style-based GAN was fine-tuned as the generative model. PARTICIPANTS/MATERIALS, SETTING, METHODS: We curated a total of 972 blastocyst images as training data, where frames were captured within the time window of 110-120 h post-insemination at 1-h intervals from TLM videos. We configured the style-based GAN model with data augmentation (AUG) and pretrained weights (Pretrained-T: with translation equivariance; Pretrained-R: with translation and rotation equivariance) to compare their optimization on image synthesis. We then applied quantitative metrics including Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) to assess the quality and fidelity of the generated images. Subsequently, we evaluated qualitative performance by measuring the intelligence behavior of the model through the visual Turing test. To this end, 60 individuals with diverse backgrounds and expertise in clinical embryology and IVF evaluated the quality of synthetic embryo images. 
MAIN RESULTS AND THE ROLE OF CHANCE: During the training process, we observed consistent improvement in image quality, as measured by FID and KID scores. The Pretrained and AUG + Pretrained models started with markedly lower FID and KID values than the Baseline and AUG + Baseline models. Following 5000 training iterations, the AUG + Pretrained-R model showed the highest performance of the five evaluated configurations, with FID and KID scores of 15.2 and 0.004, respectively. Subsequently, we carried out the visual Turing test, in which IVF embryologists, IVF laboratory technicians, and non-experts evaluated the synthetic blastocyst-stage embryo images; the groups obtained similar specificity, with marginal differences in accuracy and sensitivity. LIMITATIONS, REASONS FOR CAUTION: In this study, we primarily focused the training data on blastocyst images, as IVF embryos are primarily assessed at the blastocyst stage. However, generating images of the other preimplantation stages would offer further insights into the development of preimplantation embryos and IVF success. In addition, we resized training images to a resolution of 256 × 256 pixels to moderate the computational costs of training the style-based GAN models. Further research is needed involving a more extensive and diverse dataset spanning development from zygote to blastocyst (e.g. video generation) and improved image resolution, to facilitate the development of comprehensive AI algorithms and to produce higher-quality images. WIDER IMPLICATIONS OF THE FINDINGS: Generative AI models hold promising potential for generating high-fidelity human blastocyst images, which enables the development of robust AI models by providing sufficient training data while safeguarding patient data privacy.
Additionally, this may help to produce sufficient embryo imaging training data with different (rare) abnormal features, such as embryonic arrest or tripolar cell division, to avoid class imbalance and achieve balanced datasets. Thus, generative models may offer a compelling opportunity to transform embryo selection procedures and substantially enhance IVF outcomes. STUDY FUNDING/COMPETING INTEREST(S): This study was supported by a Horizon 2020 innovation grant (ERIN, grant no. EU952516) and a Horizon Europe grant (NESTOR, grant no. 101120075) of the European Commission to A.S. and M.Z.E., the Estonian Research Council (grant no. PRG1076) to A.S., and the EVA (Erfelijkheid Voortplanting & Aanleg) specialty program (grant no. KP111513) of Maastricht University Medical Centre (MUMC+) to M.Z.E. TRIAL REGISTRATION NUMBER: Not applicable.
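The FID metric reported above has a closed form once real and generated features are modeled as Gaussians: FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2(C_r C_f)^(1/2)). A minimal numpy sketch follows; the eigendecomposition-based matrix square root is an assumption for brevity (practical implementations typically use scipy.linalg.sqrtm), and feature extraction by an Inception network is omitted.

```python
import numpy as np

def fid(feats_real, feats_fake):
    """Frechet Inception Distance between two sets of feature vectors.

    Computes ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 sqrt(C_r C_f)), with the
    matrix square root taken via eigendecomposition (illustrative choice).
    """
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    prod = c_r @ c_f
    vals, vecs = np.linalg.eig(prod)
    # clamp tiny negative eigenvalues from round-off before the square root
    sqrt_prod = (vecs * np.sqrt(np.maximum(vals.real, 0.0))) @ np.linalg.inv(vecs)
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(c_r + c_f - 2.0 * sqrt_prod).real)
```

Identical feature sets give an FID near zero; shifting one set raises the mean term, which matches the intuition that lower FID means closer distributions.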

4.
Small ; : e2401238, 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38602230

ABSTRACT

Multifunctional devices integrated with electrochromic and supercapacitance properties are fascinating because of their extensive usage in modern electronic applications. In this work, vanadium-doped cobalt chloride carbonate hydroxide hydrate nanostructures (V-C3H NSs) are successfully synthesized and show unique electrochromic and supercapacitor properties. The V-C3H NSs material exhibits a high specific capacitance of 1219.9 F g-1 at 1 mV s-1 with a capacitance retention of 100% over 30 000 CV cycles. The electrochromic performance of the V-C3H NSs material is confirmed through in situ spectroelectrochemical measurements, where the switching time, coloration efficiency (CE), and optical modulation (∆T) are found to be 15.7 and 18.8 s, 65.85 cm2 C-1 and 69%, respectively. A coupled multilayer artificial neural network (ANN) model is framed to predict potential and current from red (R), green (G), and blue (B) color values. The optimized V-C3H NSs are used as the active materials in the fabrication of flexible/wearable electrochromic micro-supercapacitor devices (FEMSDs) through a cost-effective mask-assisted vacuum filtration method. The fabricated FEMSD exhibits an areal capacitance of 47.15 mF cm-2 at 1 mV s-1 and offers a maximum areal energy and power density of 104.78 Wh cm-2 and 0.04 mW cm-2, respectively. This material's interesting energy storage and electrochromic properties are promising in multifunctional electrochromic energy storage applications.

5.
Oral Radiol ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589600

ABSTRACT

OBJECTIVES: To evaluate the feasibility of using the pulp volume (Pv) to total volume (Tv) ratio (Pv:Tv), obtained from cone beam computed tomography (CBCT) scans of single-rooted teeth, for age estimation in a Brazilian population sample. METHODS: After obtaining approval from the ethics committee, the study commenced by applying inclusion criteria to screen CBCT scans, resulting in a probability-based sample of participants aged 18 years and older (ranging from 18 to 82 years, with a mean age of 46.44 years). A total of 517 single-rooted teeth, including maxillary central incisors (CI), mandibular canines (C), and mandibular first premolars (FP), were chosen based on excellent agreement values (> 0.9). Pv and Tv measurements were conducted using semi-automatic segmentation with ITK-SNAP 3.8 software. Statistical analysis was performed using Jamovi software, with a significance level set at 5% (α = 0.05). RESULTS: A strong negative correlation (r > -0.7) was observed between chronological age and the Pv:Tv ratio across all examined teeth. However, when conducting regression analysis with Pv:Tv data and chronological age as the independent variable, only the mandibular FP teeth exhibited a normal distribution. The resulting linear model demonstrated moderate predictive value (approximately 64%) in explaining the variance in chronological age, but caution should be exercised when interpreting these findings. CONCLUSIONS: The method of measuring individual tooth volume using CBCT to estimate chronological age via Pv:Tv has been demonstrated as effective and reproducible within the Brazilian population sample.
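The regression step described above, chronological age modeled as a linear function of the Pv:Tv ratio, can be sketched with ordinary least squares in numpy. This is an illustration only: the data below are synthetic, and the coefficients are not the study's.

```python
import numpy as np

def fit_age_model(pv_tv, age):
    """OLS fit of age = b0 + b1 * (Pv:Tv), returning (coefficients, R^2).

    Mirrors the form of the paper's regression of chronological age on the
    pulp-to-tooth volume ratio; inputs here are assumed 1-D numpy arrays.
    """
    X = np.column_stack([np.ones_like(pv_tv), pv_tv])
    beta, *_ = np.linalg.lstsq(X, age, rcond=None)
    pred = X @ beta
    ss_res = np.sum((age - pred) ** 2)
    ss_tot = np.sum((age - age.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

The negative slope b1 reflects the strong negative correlation reported: secondary dentine deposition shrinks the pulp with age, so a smaller Pv:Tv predicts an older individual.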

6.
Ann Nucl Med ; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589677

ABSTRACT

OBJECTIVE: We developed a deep learning model for distinguishing radiation therapy (RT)-related changes and tumour recurrence in patients with lung cancer who underwent RT, and evaluated its performance. METHODS: We retrospectively recruited 308 patients with lung cancer with RT-related changes observed on 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET/CT) performed after RT. Patients were labelled as positive or negative for tumour recurrence through histologic diagnosis or clinical follow-up after 18F-FDG PET/CT. A two-dimensional (2D) slice-based convolutional neural network (CNN) model was created with a total of 3329 slices as input, and performance was evaluated with five independent test sets. RESULTS: For the five independent test sets, the area under the curve (AUC) of the receiver operating characteristic curve, sensitivity, and specificity were in the range of 0.98-0.99, 95-98%, and 87-95%, respectively. The region determined by the model was confirmed as an actual recurred tumour through the explainable artificial intelligence (AI) using gradient-weighted class activation mapping (Grad-CAM). CONCLUSION: The 2D slice-based CNN model using 18F-FDG PET imaging was able to distinguish well between RT-related changes and tumour recurrence in patients with lung cancer.

7.
Data Brief ; 54: 110334, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38586139

ABSTRACT

The bacterium Burkholderia glumae causes bacterial grain rot in rice, posing a significant threat to the crop's yield; it thrives particularly during the flowering and grain-filling stages. The disease is especially evident in rice grains before harvest, presenting challenges in the detection and classification of rice panicles. Firstly, diseased grains may mix with healthy ones, complicating their separation. Secondly, the grains on a panicle vary from small to large, which can be problematic for object detection methods. Thirdly, disease severity can be classified by evaluating the extent of infection on rice panicles to assess its impact on yield. Finally, the challenges in detection, classification, and preprocessing for disease identification and management call for diverse machine learning and deep learning approaches to develop optimal methods and support smart agriculture.

8.
Oral Radiol ; 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38625432

ABSTRACT

OBJECTIVE: This study aimed to evaluate the usability of morphometric features obtained from mandibular panoramic radiographs in gender determination using machine learning algorithms. MATERIALS AND METHODS: High-resolution radiographs of 200 patients aged 20-77 (41.0 ± 12.7) years were included in the study. Twelve morphometric measurements were extracted from each digital panoramic radiograph. These measurements were used as features in the machine learning phase, in which six different algorithms were applied (k-nearest neighbor, decision trees, support vector machines, naive Bayes, linear discriminant analysis, and neural networks). To evaluate reliability, we performed tenfold cross-validation, repeated 10 times for every classification process, which improves the generalizability of the results to other datasets. RESULTS: When all 12 features are used together, the accuracy is 82.6 ± 0.5%. The classification accuracies were also compared using each feature alone. The three features that give the highest accuracy are coronoid height (80.9 ± 0.9%), condyle height (78.2 ± 0.5%), and ramus height (77.2 ± 0.4%). Compared across classification algorithms, the highest accuracy was obtained with the naive Bayes algorithm, at 84.0 ± 0.4%. CONCLUSION: Machine learning techniques can accurately determine gender by analyzing mandibular morphometric structures from digital panoramic radiographs. The most precise results are achieved by evaluating the structures in combination, using the attributes selected by applying the minimum redundancy maximum relevance (MRMR) algorithm to all features.
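Naive Bayes, the best-performing algorithm above, is simple enough to sketch in full. The class below is a minimal Gaussian naive Bayes for continuous morphometric features, an illustration only, not the study's implementation (which would typically use a library such as scikit-learn).

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class, per-feature Gaussians
    with a log-likelihood + log-prior decision rule."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(0) for c in self.classes])
        # small variance floor avoids division by zero on constant features
        self.var = np.array([X[y == c].var(0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log N(x | mu, var), summed over (assumed independent) features
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(-1) + np.log(self.prior)
        return self.classes[np.argmax(ll, axis=1)]
```

The "naive" independence assumption is what makes the model tractable with only 200 radiographs and 12 features: each class needs just per-feature means and variances.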

9.
J Microsc ; 294(2): 233-238, 2024 May.
Article in English | MEDLINE | ID: mdl-38576376

ABSTRACT

The performance of cementitious composites reinforced with fibres and/or bars depends on the bond strength between the inclusion and the cementitious matrix. How the fibre-matrix bond forms is crucial for enhancing the reliability and utilisation of reinforced composites. This research reviews recently published results on changes in the microstructure of the cement matrix surrounding steel fibres with different surface roughness, using a scanning electron microscope (SEM) coupled with a k-means clustering algorithm for image segmentation. The debonding pattern of the fibre-matrix bond after tensile loading cycles is discussed by observing, with SEM, the amount of cement paste adhered to the pulled-out fibre surface. Analysis of the SEM images thus made it possible to explain the connection between the micro-scale properties of the cement paste and the fibre after cyclic loading.
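The k-means step used for SEM image segmentation can be sketched in its simplest one-dimensional form, clustering pixel intensities so that phases such as fibre, adhered paste, and background fall into separate clusters. This is a minimal sketch: the deterministic initialisation and the intensity-only feature space are assumptions, not details from the review.

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    """1-D k-means on pixel intensities for grayscale image segmentation.

    Returns (label image, cluster centers). Initialises centers evenly
    across the intensity range to keep the sketch deterministic.
    """
    x = img.astype(float).ravel()
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        # assign each pixel to its nearest intensity center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels.reshape(img.shape), centers
```

Real SEM segmentation would usually cluster in a richer feature space (intensity plus texture), but the assignment/update loop is the same.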

10.
Front Mol Biosci ; 11: 1346242, 2024.
Article in English | MEDLINE | ID: mdl-38567100

ABSTRACT

Esophageal cancer (EC) remains a significant health challenge globally, with increasing incidence and high mortality rates. Despite advances in treatment, there remains a need for improved diagnostic methods and understanding of disease progression. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, using histopathology images. Traditional histopathological diagnosis, while being the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. This study proposes a binary classification system for detecting EC subtypes in response to these challenges. The system leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas (TCGA) Esophageal Carcinoma dataset (TCGA-ESCA). These images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed five machine learning classifiers, namely Support Vector Classifier (SVC), Logistic Regression (LR), Decision Tree (DT), AdaBoost (AD), and Random Forest (RF), along with a Feed-Forward Neural Network (FFNN). The classifiers were evaluated based on their prediction accuracy on the test dataset, yielding results of 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN classifier achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR, with a score of 0.87. The proposed approach holds promising potential as a decision-support tool for pathologists, particularly in regions with limited resources and expertise.
The timely and precise detection of EC subtypes through this system can substantially enhance the likelihood of successful treatment, ultimately leading to reduced mortality rates in patients with this aggressive cancer.
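The patch-segmentation step in the pipeline above, tiling a whole-slide image before feature extraction, is easy to sketch. The patch size and non-overlapping stride below are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def extract_patches(img, patch=224, stride=224):
    """Tile an image into fixed-size patches for downstream feature
    extraction (e.g. by a pre-trained CNN). Edge remainders smaller
    than the patch size are dropped, a common simplification."""
    h, w = img.shape[:2]
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]
```

Each patch would then be passed through the feature extractor, and patch-level features aggregated (or classified directly) to yield a slide-level prediction.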

11.
J Med Radiat Sci ; 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38571377

ABSTRACT

INTRODUCTION: Breast cancer (BC), the most frequently diagnosed malignancy among women worldwide, presents a public health challenge and affects mortality rates. Breast-conserving therapy (BCT) is a common treatment, but the risk from residual disease necessitates radiotherapy. Digital mammography monitors treatment response by identifying post-operative and radiotherapy tissue alterations, but accurate assessment of mammographic density remains a challenge. This study used OpenBreast to measure percent density (PD), offering insights into changes in mammographic density before and after BCT with radiation therapy. METHODS: This retrospective analysis included 92 female patients with BC who underwent BCT, chemotherapy, and radiotherapy, excluding those who received hormonal therapy or bilateral BCT. PD measurements were extracted using OpenBreast, an automated software package that applies computational techniques to density analysis. Data were analysed at baseline, 3 months, and 15 months post-treatment using the standardised mean difference (SMD) with Cohen's d, chi-square, and paired-sample t-tests. The predictive power of PD changes for BC was measured using receiver operating characteristic (ROC) curve analysis. RESULTS: The mean age was 53.2 years. There were no significant differences in PD between the periods. Standardised mean difference analysis revealed no significant changes in the SMD for PD before treatment compared with 3 and 15 months post-treatment. Although PD increased numerically after radiotherapy, ROC analysis revealed optimal sensitivity at 15 months post-treatment for detecting changes in breast density. CONCLUSIONS: This study utilised an automated breast density segmentation tool to assess changes in mammographic density before and after BC treatment. No significant differences in density were observed during the short-term follow-up period.
However, the results suggest that quantitative density assessment could be valuable for long-term monitoring of treatment effects. The study underscores the necessity for larger and longitudinal studies to accurately measure and validate the effectiveness of quantitative methods in clinical BC management.
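The standardised mean difference used in the analysis above has a standard pooled-variance form. A minimal sketch of Cohen's d follows; the toy inputs in the usage note are illustrative, not study data.

```python
import numpy as np

def cohens_d(a, b):
    """Standardised mean difference (Cohen's d) with pooled standard
    deviation, as used to compare percent density across time points."""
    na, nb = len(a), len(b)
    va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
    pooled = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled
```

By convention, |d| around 0.2 is a small effect and 0.8 a large one; "no significant change in SMD" corresponds to d values near zero with confidence intervals crossing zero.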

12.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610257

ABSTRACT

Images obtained in an unfavorable environment may be affected by haze or fog, leading to fuzzy image details, low contrast, and loss of important information. Recently, significant progress has been achieved in image dehazing, largely due to the adoption of deep learning techniques. However, lacking modules specifically designed to learn the unique characteristics of haze, existing deep neural network-based methods remain poorly suited to processing hazy images. In addition, most networks primarily focus on learning clear image information while disregarding potential features in hazy images. To address these limitations, we propose a contrastive multiscale transformer for image dehazing (CMT-Net). This method uses a multiscale transformer to enable the network to learn global haze features at multiple scales. Furthermore, we introduce feature combination attention and a haze-aware module to enhance the network's ability to handle varying concentrations of haze by assigning more weight to regions containing haze. Finally, we design a multistage contrastive learning loss, incorporating different positive and negative samples at various stages, to guide the network's learning toward restoring realistic, haze-free images. The experimental findings demonstrate that CMT-Net delivers exceptional performance on established datasets and superior visual results.
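The contrastive idea above, pulling restored features toward the clear image (positive) and away from hazy images (negatives), can be sketched with a generic InfoNCE-style loss. This is a stand-in under stated assumptions: the abstract does not specify the paper's exact multistage loss, and the cosine-similarity form and temperature are illustrative.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.5):
    """InfoNCE-style loss on feature vectors: low when the anchor is
    close to the positive and far from the negatives."""
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    pos = np.exp(cos(anchor, positive) / temperature)
    neg = sum(np.exp(cos(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))
```

In a multistage variant, a loss of this shape would be evaluated at several network stages with stage-specific positives and negatives, and the stage losses summed.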

13.
Materials (Basel) ; 17(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38612094

ABSTRACT

The accurate online detection of laser welding penetration depth has been a critical problem to which the industry has paid the most attention. Aiming at the laser welding process of TC4 titanium alloy, a multi-sensor monitoring system that obtained the keyhole/molten pool images and laser-induced plasma spectrum was built. The influences of laser power on the keyhole/molten pool morphologies and plasma thermo-mechanical characteristics were investigated. The results showed that there were significant correlations among the variations of the keyhole-molten pool, plasma spectrum, and penetration depth. The image features and spectral features were extracted by image processing and dimension-reduction methods, respectively. Moreover, several penetration depth prediction models based on single-sensor features and multi-sensor features were established. The mean square error of the neural network model built by multi-sensor features was 0.0162, which was smaller than that of the model built by single-sensor features. The established high-precision model provided a theoretical basis for real-time feedback control of the penetration depth in the laser welding process.

14.
Food Sci Nutr ; 12(4): 2874-2885, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38628193

ABSTRACT

Intelligent electrospun pH indicators were produced from a bio-nanocomposite of kafirin and polyethylene oxide (PEO) containing red beetroot extract. The aim was to evaluate the performance and stability of the electrospun pH indicators via image processing. Red beetroot extract was added to a mixture of kafirin and PEO at various concentrations. The mixtures were electrospun, and Fourier transform infrared (FTIR) spectroscopy confirmed the presence of kafirin, PEO, and red beetroot extract in the resulting pH indicator. The indicators showed high stability and reversibility at different temperatures, pH values, and environmental conditions, and their color change was significantly reversible, with highly desirable reversibility observed at pH values of 1, 3, 4, 5, 7, 9, and 10. The findings showed that the red beetroot extract-loaded bio-nanocomposite pH indicator, with color characteristics evaluated through image processing, can serve as a time-efficient and accurate tool for detecting and tracking pH changes caused by food spoilage.

15.
J Nucl Med ; 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637141

ABSTRACT

With the development of new radiopharmaceutical therapies, quantitative SPECT/CT has progressively emerged as a crucial tool for dosimetry. One major obstacle of SPECT is its poor resolution, which results in blurring of the activity distribution. Especially for small objects, this so-called partial-volume effect limits the accuracy of activity quantification. Numerous methods for partial-volume correction (PVC) have been proposed, but most methods have the disadvantage of assuming a spatially invariant resolution of the imaging system, which does not hold for SPECT. Furthermore, most methods require a segmentation based on anatomic information. Methods: We introduce DL-PVC, a methodology for PVC of 177Lu SPECT/CT imaging using deep learning (DL). Training was based on a dataset of 10,000 random activity distributions placed in extended cardiac-torso body phantoms. Realistic SPECT acquisitions were created using the SIMIND Monte Carlo simulation program. SPECT reconstructions without and with resolution modeling were performed using the CASToR and STIR reconstruction software, respectively. The pairs of ground-truth activity distributions and simulated SPECT images were used for training various U-Nets. Quantitative analysis of the performance of these U-Nets was based on metrics such as the structural similarity index measure or normalized root-mean-square error, but also on volume activity accuracy, a new metric that describes the fraction of voxels in which the determined activity concentration deviates from the true activity concentration by less than a certain margin. On the basis of this analysis, the optimal parameters for normalization, input size, and network architecture were identified. 
Results: Our simulation-based analysis revealed that DL-PVC (0.95/7.8%/35.8% for structural similarity index measure/normalized root-mean-square error/volume activity accuracy) outperforms SPECT without PVC (0.89/10.4%/12.1%) and after iterative Yang PVC (0.94/8.6%/15.1%). Additionally, we validated DL-PVC on 177Lu SPECT/CT measurements of 3-dimensionally printed phantoms of different geometries. Although DL-PVC showed activity recovery similar to that of the iterative Yang method, no segmentation was required. In addition, DL-PVC was able to correct other image artifacts such as Gibbs ringing, making it clearly superior at the voxel level. Conclusion: In this work, we demonstrate the added value of DL-PVC for quantitative 177Lu SPECT/CT. Our analysis validates the functionality of DL-PVC and paves the way for future deployment on clinical image data.
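The volume activity accuracy metric defined above (the fraction of voxels whose determined activity concentration deviates from the truth by less than a certain margin) is straightforward to express in numpy. The 10% relative margin and the exclusion of zero-activity voxels below are assumptions for illustration; the paper's exact margin convention is not given in the abstract.

```python
import numpy as np

def volume_activity_accuracy(pred, truth, margin=0.1):
    """Fraction of voxels whose predicted activity concentration is within
    a relative margin of the true concentration (margin value assumed)."""
    mask = truth > 0                      # only voxels with true activity
    rel_err = np.abs(pred[mask] - truth[mask]) / truth[mask]
    return float(np.mean(rel_err < margin))
```

Unlike FID-style summary statistics, this metric is voxel-wise, which is why it can expose artifacts such as Gibbs ringing that global error measures average away.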

16.
Sensors (Basel) ; 24(7)2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38610545

ABSTRACT

The degradation of road pavements due to environmental factors is a pressing issue in infrastructure maintenance, necessitating precise identification of pavement distresses. The pavement condition index (PCI) serves as a critical metric for evaluating pavement conditions, essential for effective budget allocation and performance tracking. Traditional manual PCI assessment methods are limited by labor intensity, subjectivity, and susceptibility to human error. Addressing these challenges, this paper presents a novel, end-to-end automated method for PCI calculation, integrating deep learning and image processing technologies. The first stage employs a deep learning algorithm for accurate detection of pavement cracks, followed by the application of a segmentation-based skeleton algorithm in image processing to estimate crack width precisely. This integrated approach enhances the assessment process, providing a more comprehensive evaluation of pavement integrity. The validation results demonstrate a 95% accuracy in crack detection and 90% accuracy in crack width estimation. Leveraging these results, the automated PCI rating is achieved, aligned with standards, showcasing significant improvements in the efficiency and reliability of PCI evaluations. This method offers advancements in pavement maintenance strategies and potential applications in broader road infrastructure management.
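The crack-width estimate behind segmentation-plus-skeleton methods like the one above reduces, in its simplest form, to dividing the segmented crack area by the skeleton length. A minimal sketch (the skeletonisation itself, typically done with a morphological thinning routine, is assumed to be available):

```python
import numpy as np

def mean_crack_width(crack_mask, skeleton_mask):
    """Mean crack width in pixels: segmented crack area divided by
    skeleton length. Both inputs are boolean masks; the skeleton is
    assumed to be a one-pixel-wide centerline of the crack."""
    return crack_mask.sum() / skeleton_mask.sum()
```

Converting the pixel width to millimetres (via the camera's ground sampling distance) is what feeds the distress-severity term of a PCI calculation.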

17.
Curr Med Imaging ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38616747

ABSTRACT

BACKGROUND: With the continued development of assistive treatment devices, the application of artificial knee joints in the rehabilitation of amputees is becoming increasingly mature. Residual limb length and muscle strength differ between patients, and current artificial knee joints lack adaptability for personalized rehabilitation. PURPOSE: To analyze in depth the impact of different types of artificial knee joints on the walking function of unilateral thigh amputees, improve the performance of artificial knee joints, and enhance rehabilitation outcomes, this article combines image processing technology with walking gait analysis across different artificial knee joints in unilateral thigh amputees. METHODS: Patients were divided into two groups: an experimental group of patients with unilateral leg amputation and a control group of patients fitted with different prostheses. An image processing system was constructed from general-purpose video and computer hardware and used to recognize and track anatomical landmarks; gait was then analyzed for each group using image processing techniques. Finally, corresponding treatment plans were developed based on the differing psychological reactions of the amputees. RESULTS: The different prostheses worn by amputees provided varying degrees of convenience in daily life. Walking stability with a hydraulic single-axis prosthetic joint was only 79%, with relatively low gait elegance, whereas walking stability with an intelligent artificial joint reached 96%, with largely good gait elegance.
CONCLUSION: Image processing technology helps doctors and rehabilitation practitioners better understand the gait characteristics and rehabilitation progress of patients wearing different artificial knee joints, providing an objective basis for personalized rehabilitation.

18.
Curr Med Imaging ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38616750

ABSTRACT

BACKGROUND: PET stands as a valuable diagnostic tool in nuclear medicine, enabling the observation of metabolic and physiological changes at a molecular level. However, PET scans have a number of drawbacks, such as poor spatial resolution, noisy images, scattered radiation, artifacts, and radiation exposure. These challenges demonstrate the need for optimization of image processing techniques. OBJECTIVES: Our objective is to identify the evolving trends and impact of publications in this field, as well as the most productive and influential countries, institutions, authors, themes, and articles. METHODS: A bibliometric study was conducted using a comprehensive query string ("positron emission tomography" AND "image processing" AND optimization) to retrieve 1,783 publications from 1981 to 2022 in the Scopus database. RESULTS: The findings revealed that the most influential country, institution, and authors are from the USA, and the most prevalent theme is TOF PET image reconstruction. CONCLUSION: The increasing publication trend in the optimization of image processing for PET would help address the challenges of PET by reducing radiation exposure, increasing scanning speed, and enhancing lesion identification.

19.
J Biomed Opt ; 29(Suppl 2): S22706, 2024 Jun.
Artigo em Inglês | MEDLINE | ID: mdl-38638450

RESUMO

Significance: Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened new directions of investigation by enabling systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow rapid acquisition of tomographic images, the pipeline for analyzing the raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: 3D cell segmentation from the acquired tomograms. Aim: We report CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging), an algorithm for the 3D segmentation of QPI images. Approach: The cell segmentation algorithm mimics the gemstone extraction process, beginning with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure. A 2D image is generated, and a segmentation algorithm identifies the boundary in the x-y plane. Leveraging cell continuity across consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process. Results: The CellSNAP algorithm outstrips the current gold standard in speed, robustness, and ease of implementation, segmenting a cell in under 2 s on a single-core processor; the implementation can easily be parallelized on a multi-core system for further speed improvements. For cases where segmentation is possible with the existing standard method, our algorithm differs on average by 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets in which cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools.
Conclusion: Our proposed method is less memory-intensive and significantly faster than existing methods, and it can easily be run on a student laptop. Because the approach is rule-based, there is no need to collect large volumes of imaging data and manually annotate them for machine-learning-based model training. We envision this work will lead to broader adoption of QPI for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.
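The coarse-to-fine idea in the approach above, extruding a 2D x-y mask along z and then refining each slice, can be sketched as follows. This is a simplified stand-in, not the CellSNAP implementation: the fixed global threshold and the maximum-intensity projection are my own assumptions.

```python
import numpy as np

def segment_3d(tomogram, thresh):
    """Coarse-to-fine 3D segmentation sketch for a (z, y, x) intensity stack:
    1. Collapse the z-stack into a 2D maximum-intensity projection.
    2. Threshold it to get a coarse x-y cell mask and extrude it along z.
    3. Refine each z-slice ("fine chiseling") by intersecting the extruded
       mask with that slice's own above-threshold pixels."""
    projection = tomogram.max(axis=0)          # 2D MIP outlining the cell
    mask2d = projection >= thresh              # coarse x-y boundary
    extruded = np.broadcast_to(mask2d, tomogram.shape)
    refined = extruded & (tomogram >= thresh)  # per-slice refinement
    return refined
```

The published algorithm refines slices by exploiting cell continuity between neighboring z-planes rather than a single global threshold, but the extrude-then-chisel structure is the same.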


Assuntos
Processamento de Imagem Assistida por Computador , Imageamento Tridimensional , Humanos , Processamento de Imagem Assistida por Computador/métodos , Imageamento Tridimensional/métodos , Algoritmos , Imagem Óptica
20.
J Imaging Inform Med ; 2024 Apr 15.
Artigo em Inglês | MEDLINE | ID: mdl-38622385

RESUMO

Convolutional neural networks (CNNs) have been used for a wide variety of deep learning applications, especially in computer vision. For medical image processing, researchers have identified several challenges with CNNs: they can generate less informative features, struggle to capture both high- and low-frequency information within feature maps, and incur substantial computational cost when receptive fields are enlarged by deepening the network. Transformers have emerged as an approach to overcome these specific limitations of CNNs in medical image analysis. Because accurate patient diagnosis requires preserving all spatial details of medical images, this research introduced a pure Vision Transformer (ViT) denoising network for medical image processing, specifically for low-dose computed tomography (LDCT) image denoising. The proposed model follows a U-Net framework containing ViT modules and integrates the Noise2Neighbor (N2N) interpolation operation. Five datasets of paired LDCT and normal-dose CT (NDCT) images were used in the experiments. To test the efficacy of the proposed model, quantitative and visual results were compared among CNN-based (BM3D, RED-CNN, DRL-E-MP), hybrid CNN-ViT (TED-Net), and the proposed pure ViT-based denoising models. The findings showed an increase of about 15-20% in SSIM and PSNR when using self-attention transformers rather than a typical pure CNN. Visual results also improved, especially in rendering fine structural details of CT images.
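The SSIM and PSNR figures reported above rest on standard image-quality metrics, which can be computed as below. Note one assumption: the SSIM here is a single-window global form, a rough proxy for the usual sliding-window SSIM that denoising papers report.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range**2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM evaluated over the whole image as one window (the standard
    metric averages a sliding Gaussian window; this is a global proxy)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

In a denoising comparison like the one described, both metrics would be computed between each model's output and the paired NDCT reference, then averaged over the test set.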
